58 research outputs found

    Learning assistive teleoperation behaviors from demonstration

    Emergency response in hostile environments often involves remotely operated vehicles (ROVs) that are teleoperated, as interaction with the environment is typically required. Many ROV tasks are common to such scenarios and are often recurrent. We show how a probabilistic approach can be used to learn a task behavior model from data. Such a model can then be used to assist an operator performing the same task in future missions. We show how this approach can capture behaviors (constraints) that are present in the training data, and how this model can be combined with the operator’s input online. We present an illustrative planar example and elaborate with a non-destructive testing (NDT) scanning task on a teleoperation mock-up using a two-armed Baxter robot. We demonstrate how our approach can learn task-specific behaviors from examples and automatically control the overall system, combining the operator’s input and the learned model online in an assistive teleoperation manner. This can potentially reduce the time and effort required to perform teleoperation tasks that are commonplace in ROV missions in the context of security, maintenance and rescue robotics.
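
    A common way to combine a learned probabilistic behavior model with live operator input, as this abstract describes, is precision-weighted blending (a product of Gaussians). The sketch below is illustrative only and assumes the learned behavior at each step is a Gaussian over commands; none of the names are the authors' API.

```python
# Minimal sketch of assistive blending: fuse a learned model's prediction
# with the operator's command, weighting each by its precision (1/variance).
# All names here are illustrative, not taken from the paper.

def fuse_gaussians(mu_model, var_model, mu_op, var_op):
    """Blend model prediction and operator command by precision weighting."""
    w_model = 1.0 / var_model        # precision of the learned model
    w_op = 1.0 / var_op              # precision of the operator input
    mu = (w_model * mu_model + w_op * mu_op) / (w_model + w_op)
    var = 1.0 / (w_model + w_op)
    return mu, var

# Where the model is confident (low variance), it dominates the command;
# where it is uncertain, the operator's input passes through nearly unchanged.
mu, var = fuse_gaussians(mu_model=0.0, var_model=0.01, mu_op=1.0, var_op=1.0)
```

    This captures the "constraints present in the training data" idea: along directions the demonstrations tightly constrain, the learned model pulls the command back; elsewhere the operator retains authority.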

    Supervisory teleoperation with online learning and optimal control

    We present a general approach for online learning and optimal control of manipulation tasks in a supervisory teleoperation context, targeted to underwater remotely operated vehicles (ROVs). We use an online Bayesian nonparametric learning algorithm to build models of manipulation motions as task-parametrized hidden semi-Markov models (TP-HSMM) that capture the spatiotemporal characteristics of demonstrated motions in a probabilistic representation. Motions are then executed autonomously using an optimal controller, namely a model predictive control (MPC) approach in a receding horizon fashion. In this way, the remote system locally closes a high-frequency control loop that robustly handles noise and dynamically changing environments. Our system automates common and recurring tasks, allowing the operator to focus only on the tasks that genuinely require human intervention. We demonstrate how our solution can be used for a hot-stabbing motion in an underwater teleoperation scenario. We evaluate the performance of the system over multiple trials and compare with a state-of-the-art approach. We report that our approach generalizes well with only a few demonstrations, accurately performs the learned task and adapts online to dynamically changing task conditions.
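
    The receding-horizon execution loop described here has a simple skeleton: at every tick, re-plan over a short horizon against the reference supplied by the learned model, apply only the first input, and repeat. The sketch below is a stand-in on a 1-D single-integrator, not the paper's TP-HSMM/MPC implementation; all names are invented for illustration.

```python
# Hedged sketch of a receding-horizon loop. A list of stepwise targets plays
# the role of the TP-HSMM's reference sequence; the "plan" is a greedy rollout
# rather than a true MPC solve, kept minimal to show the loop structure.

def plan_horizon(x, refs, horizon, gain=0.5):
    """Greedy horizon rollout on a 1-D single-integrator model."""
    inputs = []
    for k in range(horizon):
        r = refs[min(k, len(refs) - 1)]
        u = gain * (r - x)          # proportional move toward the reference
        inputs.append(u)
        x = x + u                    # simulate one step ahead
    return inputs

def receding_horizon(x0, refs, horizon=5):
    x, trajectory = x0, [x0]
    for t in range(len(refs)):
        u = plan_horizon(x, refs[t:], horizon)[0]  # apply first input only
        x = x + u                                  # true system step
        trajectory.append(x)
    return trajectory

traj = receding_horizon(0.0, refs=[1.0] * 20)
```

    Re-planning from the measured state at each tick is what lets the remote side reject noise and track a changing environment without waiting on the operator's link.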

    Learning task-space synergies using Riemannian geometry

    In the context of robotic control, synergies can form elementary units of behavior. By specifying task-dependent coordination behaviors at a low control level, one can achieve task-specific disturbance rejection. In this work we present an approach to learn the parameters of such low-level controllers by demonstration. We identify a synergy by extracting covariance information from demonstration data. The extracted synergy is used to derive a time-invariant state feedback controller through optimal control. To cope with the non-Euclidean nature of robot poses, we utilize Riemannian geometry, where both estimation of the covariance and the associated controller take into account the geometry of the pose manifold. We demonstrate the efficacy of the approach experimentally in a bimanual manipulation task.
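
    A common Euclidean reading of "covariance from demonstrations → time-invariant feedback controller through optimal control" is to use the inverse demonstration covariance as the state-cost weight in a discrete LQR. The sketch below follows that reading only; it omits the paper's Riemannian treatment of poses, and the dynamics and names are illustrative stand-ins.

```python
import numpy as np

# Hedged sketch: tightly coordinated (low-variance) directions in the demos
# receive high state cost, so the resulting gain rejects disturbances along
# them more aggressively. Identity stand-in dynamics, not a real robot model.

def lqr_gain(A, B, Q, R, iters=500):
    """Solve the discrete algebraic Riccati equation by fixed-point iteration."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

# Synthetic demos: dimension 0 is tightly constrained, dimension 1 is loose.
demos = np.random.default_rng(0).normal(0.0, [0.05, 1.0], size=(200, 2))
Q = np.linalg.inv(np.cov(demos.T))   # inverse covariance as state cost
A, B = np.eye(2), np.eye(2)          # identity stand-in dynamics
K = lqr_gain(A, B, Q, np.eye(2))
```

    The gain `K` penalizes deviation along the low-variance demonstration direction far more than along the loosely constrained one, which is exactly the task-specific disturbance rejection the abstract describes.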

    Trajectory and Foothold Optimization using Low-Dimensional Models for Rough Terrain Locomotion

    We present a trajectory optimization framework for legged locomotion on rough terrain. We jointly optimize the center of mass motion and the foothold locations, while considering terrain conditions. We use a terrain costmap to quantify the desirability of a foothold location. We increase the gait's adaptability to the terrain by optimizing the step phase duration and modulating the trunk attitude, resulting in motions with guaranteed stability. We show that the combination of parametric models, stochastic exploration and receding horizon planning allows us to handle the many local minima associated with different terrain conditions and walking patterns. This combination delivers robust motion plans without the need for warm-starting. Moreover, we use soft constraints to allow for increased flexibility when searching in the cost landscape of our problem. We showcase the performance of our trajectory optimization framework on multiple terrain conditions and validate our method in realistic simulation scenarios and experimental trials on a hydraulic, torque controlled quadruped robot.
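
    The combination of a terrain costmap, soft constraints, and stochastic exploration can be illustrated with a toy foothold selector: score sampled candidates by terrain cost plus a soft penalty on deviating from the nominal step, and keep the best. This is a sketch of the idea only, with an invented costmap, not the paper's solver.

```python
import math
import random

# Hedged sketch: random sampling stands in for the stochastic exploration the
# abstract mentions, and the quadratic deviation term is a soft constraint
# (penalize, rather than forbid, stepping far from the nominal foothold).

def terrain_cost(x, y):
    """Toy costmap: a hazardous bump centered at (0.3, 0.0)."""
    return math.exp(-((x - 0.3) ** 2 + y ** 2) / 0.01)

def pick_foothold(nominal, n_samples=500, radius=0.2, soft_weight=2.0, seed=0):
    rng = random.Random(seed)
    best, best_cost = None, float("inf")
    for _ in range(n_samples):
        x = nominal[0] + rng.uniform(-radius, radius)
        y = nominal[1] + rng.uniform(-radius, radius)
        deviation = (x - nominal[0]) ** 2 + (y - nominal[1]) ** 2
        cost = terrain_cost(x, y) + soft_weight * deviation
        if cost < best_cost:
            best, best_cost = (x, y), cost
    return best

foot = pick_foothold(nominal=(0.3, 0.0))  # steps off the hazardous bump
```

    Because the deviation term is soft, the sampler trades step length against terrain hazard instead of failing when the nominal foothold is infeasible, which is one way such formulations avoid getting stuck in local minima.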

    History and Actuality of Galician Emigrants: A Galicia (Spain) Shared between Latin America and Europe

    Despite the significant advances in path planning methods, problems involving highly constrained spaces are still challenging. In particular, in many situations the configuration space is a non-parametrizable variety implicitly defined by constraints, which complicates the successful generalization of sampling-based path planners. In this paper, we present a new path planning algorithm specially tailored for highly constrained systems. It builds on recently developed tools for higher-dimensional continuation, which provide numerical procedures to describe an implicitly defined variety using a set of local charts. We propose to extend these methods to obtain an efficient path planner on varieties, handling highly constrained problems. The advantage of this planner is that it operates directly in the configuration space, rather than in the higher-dimensional ambient space as most existing methods do.
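
    The basic numerical primitive behind chart-based planning on an implicitly defined variety is projecting an ambient-space point onto the constraint manifold F(x) = 0 with Newton steps. The sketch below uses a toy unit-circle constraint to show that primitive; the paper's higher-dimensional continuation machinery builds local charts on top of exactly this kind of projection.

```python
# Hedged sketch: least-squares Newton projection onto the variety
# F(x, y) = x^2 + y^2 - 1 = 0, stepping along the constraint gradient.

def project_to_circle(x, y, tol=1e-10, max_iter=50):
    """Project an ambient point onto the unit circle."""
    for _ in range(max_iter):
        f = x * x + y * y - 1.0
        if abs(f) < tol:
            break
        gx, gy = 2.0 * x, 2.0 * y          # gradient of F
        step = f / (gx * gx + gy * gy)     # least-squares Newton step
        x, y = x - step * gx, y - step * gy
    return x, y

px, py = project_to_circle(2.0, 1.0)       # lands on the unit circle
```

    A planner that samples in a chart's tangent space and projects back with this kind of iteration stays on the variety by construction, which is why it can operate directly in the configuration space rather than the ambient space.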

    Review of the techniques used in motor‐cognitive human‐robot skill transfer

    A conventional robot programming method extensively limits the reusability of skills in the developmental aspect. Engineers programme a robot in a targeted manner for the realisation of predefined skills. The low reusability of general‐purpose robot skills is mainly reflected in their inability to cope with novel and complex scenarios. Skill transfer aims to transfer human skills to general‐purpose manipulators or mobile robots to replicate human‐like behaviours. Skill transfer methods that are commonly used at present, such as learning from demonstration (LfD) or imitation learning, endow the robot with the expert's low‐level motor and high‐level decision‐making ability, so that skills can be reproduced and generalised according to perceived context. The improvement of robot cognition usually relates to an improvement in the autonomous high‐level decision‐making ability. Based on the idea of establishing a generic or specialised robot skill library, robots are expected to autonomously reason about the need to use skills and plan compound movements according to sensory input. In recent years, many successful studies in this area have demonstrated their effectiveness. Herein, a detailed review is provided on techniques for transferring skills, along with their applications, advancements, and limitations, especially in LfD. Future research directions are also suggested.

    Learning from demonstration for semi-autonomous teleoperation

    Teleoperation in domains such as deep-sea or space often requires the completion of a set of recurrent tasks. We present a framework that uses a probabilistic approach to learn models of manipulation tasks from demonstration. We show how such a framework can be used in an underwater ROV teleoperation context to assist the operator. The learned representation can be used to resolve inconsistencies between the operator’s and the robot’s space in a structured manner, and as a fall-back system to perform previously learned tasks autonomously when teleoperation is not possible. We evaluate our framework with a realistic ROV task on a teleoperation mock-up with a group of volunteers, showing a significant decrease in time to complete the task when our approach is used. In addition, we illustrate how the system can execute previously learned tasks autonomously when the communication with the operator is lost.
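
    The fall-back behavior described here amounts to a supervisor that forwards operator commands while the link is alive and switches to autonomous playback of the learned model once messages go stale. The sketch below is a minimal stand-in with invented names and a tick-based timeout, not the authors' system.

```python
# Hedged sketch: teleop/auto arbitration on a stale-link timeout.
# operator_msgs maps tick -> command; gaps longer than `timeout` ticks
# trigger autonomous playback of the learned task model.

def supervise(ticks, operator_msgs, learned_playback, timeout=3):
    last_seen, out = -1, []
    for t in range(ticks):
        if t in operator_msgs:
            last_seen = t
            out.append(("teleop", operator_msgs[t]))
        elif t - last_seen > timeout:
            out.append(("auto", learned_playback(t)))   # link considered lost
        else:
            out.append(("hold", None))                  # brief gap: hold pose
    return out

log = supervise(10, {0: "fwd", 1: "fwd"}, learned_playback=lambda t: f"step{t}")
```

    Short dropouts are absorbed by holding the last pose, so the system only commits to autonomous execution when the communication loss is sustained.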

    Learning sequences of approximations for hierarchical motion planning

    The process of designing hierarchical motion planners typically involves problem-specific intuition and implementations. This process is sub-optimal both in terms of solution space (the number of possible search-space approximations, planner parameter choices, etc.) and the amount of human labour. In this paper we show that the design of hierarchical motion planners does not have to be manual. We present a method for parameterizing and then optimizing sequences of problem approximations used in hierarchical motion planning. We define these as a specific kind of graph with intermediate state-spaces and solutions as nodes, and costs and planner parameters as edge properties. These properties become a continuous optimization variable that changes the sequence and parameters of sub-planners in the hierarchy. Using Pareto-front estimation, our method automatically discovers multiple designs of optimal computation-time/motion-cost trade-offs. We evaluate the method on a set of legged robot motion planning problems where hand-designed hierarchies are abundant. Our method discovers sequences of problem approximations which achieve similar, and slightly higher, performance than the best human-designed hierarchies. The performance gain significantly increases on new problems, yielding 12x faster computation times and 10% higher success rates.
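
    The Pareto-front step is easy to make concrete: each candidate hierarchy design yields a (computation-time, motion-cost) pair, and only designs that no other design beats on both objectives survive. The sketch below uses invented candidate data for illustration.

```python
# Hedged sketch of non-dominated filtering over (time, cost) pairs.

def pareto_front(points):
    """Return the non-dominated subset of (time, cost) pairs, sorted by time."""
    front = []
    for p in points:
        dominated = any(
            q[0] <= p[0] and q[1] <= p[1] and q != p for q in points
        )
        if not dominated:
            front.append(p)
    return sorted(front)

designs = [(1.0, 9.0), (2.0, 4.0), (3.0, 5.0), (4.0, 2.0), (5.0, 2.5)]
front = pareto_front(designs)   # (3.0, 5.0) and (5.0, 2.5) are dominated
```

    Returning the whole front, rather than a single optimum, is what lets the method hand back multiple designs trading planning speed against motion quality, as the abstract describes.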